# Low-resource training
## ContentV 8B
**License:** Apache-2.0 · **Task:** Video Processing · **Author:** ByteDance · **Downloads:** 417 · **Likes:** 25

ContentV is an efficient video generation framework that achieves high-quality video generation with limited computing resources through a minimalist architecture, a multi-stage training strategy, and a cost-effective reinforcement learning framework.

## Orpheus Bangla TTS GGUF
**License:** Apache-2.0 · **Task:** Speech Synthesis · **Tags:** Other · **Author:** asif00 · **Downloads:** 55 · **Likes:** 0

A fine-tuned version of the Orpheus 3B TTS model for Bengali, trained on 955 audio samples and suitable for experimental Bengali speech synthesis.

## MMS TTS Thai Female V2
**Task:** Speech Synthesis · **Tags:** Other · **Author:** VIZINTZOR · **Downloads:** 47 · **Likes:** 0

A Thai female-voice text-to-speech (TTS) model based on the VITS architecture, supporting high-quality Thai speech synthesis.

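Since the model follows the VITS/MMS-TTS architecture, it can in principle be run with the `transformers` VITS classes. A minimal sketch, assuming the repo id `VIZINTZOR/MMS-TTS-THAI-FEMALEV2` (inferred from the author and model name above) and a standard MMS-TTS checkpoint layout:

```python
# Sketch only: the repo id and VITS-compatible layout are assumptions.
import torch
import scipy.io.wavfile
from transformers import VitsModel, AutoTokenizer

repo_id = "VIZINTZOR/MMS-TTS-THAI-FEMALEV2"  # assumed repo id
model = VitsModel.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("สวัสดีครับ", return_tensors="pt")  # "hello" in Thai
with torch.no_grad():
    waveform = model(**inputs).waveform  # (batch, num_samples) float tensor

scipy.io.wavfile.write(
    "thai_tts.wav",
    rate=model.config.sampling_rate,
    data=waveform.squeeze().numpy(),
)
```
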
## AuroraCap 7B VID (XTuner)
**License:** Apache-2.0 · **Task:** Video-to-Text · **Author:** wchai · **Downloads:** 31 · **Likes:** 5

AuroraCap is a multimodal large language model for image and video captioning, focusing on efficient and detailed video caption generation.

## Biggie SmoLlm 0.15B Base
**License:** MIT · **Task:** Large Language Model · **Tags:** Transformers · **Author:** nisten · **Downloads:** 944 · **Likes:** 235

An upgraded version of the SmolLM-135M small language model with 0.18B parameters, suited to training experiments, with good inference speed and coherence.

## BitNet b1.58 XL
**License:** MIT · **Task:** Large Language Model · **Tags:** Transformers · **Author:** 1bitLLM · **Downloads:** 10.64k · **Likes:** 34

BitNet b1.58 is a large language model with ternary (roughly 1.58-bit) weights, trained on 100 billion tokens from the RedPajama dataset, significantly reducing compute and memory requirements while maintaining performance.

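The "b1.58" in the name refers to the ternary weights: each weight takes one of the three values {-1, 0, +1}, i.e. log2(3) ≈ 1.58 bits. A minimal sketch of the "absmean" weight quantization described in the BitNet b1.58 paper, which scales a weight tensor by its mean absolute value before rounding to the ternary grid:

```python
# Sketch of BitNet b1.58's absmean weight quantization (per the paper):
# scale W by its mean absolute value, then round and clip to {-1, 0, +1}.
import torch

def absmean_ternary(w: torch.Tensor, eps: float = 1e-5):
    gamma = w.abs().mean()                          # per-tensor scale
    w_q = (w / (gamma + eps)).round().clamp(-1, 1)  # ternary weights
    return w_q, gamma                               # dequantize: w_q * gamma

w = torch.randn(4, 4)
w_q, gamma = absmean_ternary(w)
print(w_q)          # every entry is -1., 0., or 1.
print(w_q * gamma)  # coarse reconstruction of the original weights
```

In the paper this quantization is applied to the weights of each linear layer, with activations quantized separately to 8 bits.
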
## TinyStories GPT2 3M
**Task:** Large Language Model · **Tags:** Transformers, English · **Author:** calum · **Downloads:** 637 · **Likes:** 7

A small GPT-2 model pre-trained on the TinyStories V2 dataset, with 3M trainable parameters and good text-generation coherence.

## TinyStories 33M
**Task:** Large Language Model · **Tags:** Transformers · **Author:** roneneldan · **Downloads:** 25.99k · **Likes:** 97

A 33M-parameter small language model trained on the TinyStories dataset, specifically designed for generating children's stories.

## TinyStories 1M
**Task:** Large Language Model · **Tags:** Transformers · **Author:** roneneldan · **Downloads:** 37.99k · **Likes:** 49

TinyStories-1M is a small language model trained on the TinyStories dataset, specifically designed to generate simple stories suitable for children's reading.

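All three TinyStories checkpoints above are standard causal language models, so they can be sampled with the `transformers` text-generation pipeline. A minimal sketch, assuming the repo id `roneneldan/TinyStories-1M` (inferred from the author and model name in the listing):

```python
# Sketch only: the repo id is inferred from the listing.
from transformers import pipeline

generator = pipeline("text-generation", model="roneneldan/TinyStories-1M")
out = generator(
    "Once upon a time there was a little dog named",
    max_new_tokens=100,  # a short story continuation
    do_sample=True,
    temperature=0.8,
)
print(out[0]["generated_text"])
```
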
## ResNet 18 1
**Task:** Image Classification · **Tags:** Transformers · **Author:** jsli96 · **Downloads:** 35 · **Likes:** 1

A ResNet-18 image classification model trained on Tiny ImageNet, a small-scale dataset designed for benchmarking and model training in computer vision tasks.

## Firefly Bloom 1b4
**Task:** Large Language Model · **Tags:** Transformers · **Author:** YeungNLP · **Downloads:** 55 · **Likes:** 23

An open-source Chinese conversational large language model optimized with instruction fine-tuning and specializing in Chinese cultural tasks, available in 1.4B and 2.6B parameter sizes.

## Whisper Large V2 Japanese 5k Steps
**License:** Apache-2.0 · **Task:** Speech Recognition · **Tags:** Transformers, Japanese · **Author:** clu-ling · **Downloads:** 144 · **Likes:** 20

A speech recognition model fine-tuned from OpenAI's whisper-large-v2 on the Japanese CommonVoice dataset, trained for 5,000 steps with a reported word error rate of 0.7449.

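A minimal transcription sketch using the automatic-speech-recognition pipeline, assuming the repo id `clu-ling/whisper-large-v2-japanese-5k-steps` (inferred from the listing) and a placeholder audio path:

```python
# Sketch only: repo id inferred from the listing; audio path is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="clu-ling/whisper-large-v2-japanese-5k-steps",
)
result = asr("speech_ja.wav")  # placeholder: any Japanese audio clip
print(result["text"])
```
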
## mT5 Small Finetuned 28jan 2
**License:** Apache-2.0 · **Task:** Text Generation · **Tags:** Transformers · **Author:** mqy · **Downloads:** 14 · **Likes:** 0

A text summarization model fine-tuned from google/mt5-small, supporting multilingual summarization tasks.

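A minimal summarization sketch with the `transformers` summarization pipeline; the repo id `mqy/mt5-small-finetuned-28jan-2` is an assumption pieced together from the author and model name above:

```python
# Sketch only: the exact repo id is an assumption based on the listing.
from transformers import pipeline

summarizer = pipeline("summarization", model="mqy/mt5-small-finetuned-28jan-2")
text = (
    "The city council met on Tuesday to discuss the new bicycle lanes. "
    "After a long debate, members voted to extend the network by ten "
    "kilometres and to add protected crossings at major intersections."
)
print(summarizer(text, max_length=30, min_length=5)[0]["summary_text"])
```
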
## T5 Small 6-3 Hi-En to En
**Task:** Machine Translation · **Tags:** Transformers · **Author:** sayanmandal · **Downloads:** 38 · **Likes:** 2

A sequence-to-sequence model based on the T5-small architecture, specifically designed for translating Hindi-English code-mixed text (hi_en) into standard English (en).

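A minimal code-mixed translation sketch via the text2text-generation pipeline; the repo id `sayanmandal/t5-small_6_3-hi_en-to-en` is an assumption based on the listing:

```python
# Sketch only: the exact repo id is an assumption based on the listing.
from transformers import pipeline

translator = pipeline(
    "text2text-generation",
    model="sayanmandal/t5-small_6_3-hi_en-to-en",  # assumed repo id
)
# Hindi-English code-mixed input -> standard English output
print(translator("mujhe yeh movie bahut achhi lagi")[0]["generated_text"])
```
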
## Wav2Vec2 Base LibriSpeech Demo Colab
**License:** Apache-2.0 · **Task:** Speech Recognition · **Tags:** Transformers · **Author:** khanhnguyen · **Downloads:** 24 · **Likes:** 0

A speech recognition model fine-tuned from facebook/wav2vec2-base on the LibriSpeech dataset, suitable for English speech-to-text tasks.

## Part1
**License:** Apache-2.0 · **Task:** Speech Recognition · **Tags:** Transformers · **Author:** zasheza · **Downloads:** 28 · **Likes:** 0

A fine-tuned speech processing model based on facebook/wav2vec2-base; no specific use case is stated.

## Wav2Vec2 Base Toy Train Data Masked Audio 10ms
**License:** Apache-2.0 · **Task:** Speech Recognition · **Tags:** Transformers · **Author:** scasutt · **Downloads:** 22 · **Likes:** 0

A speech recognition model fine-tuned from facebook/wav2vec2-base, trained on toy data with 10 ms audio masking.

## GPT2 Small Portuguese
**License:** MIT · **Task:** Large Language Model · **Tags:** Other · **Author:** pierreguillou · **Downloads:** 10.09k · **Likes:** 45

A Portuguese language model fine-tuned from the GPT-2 small model on Portuguese Wikipedia, supporting NLP tasks such as text generation.

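A minimal generation sketch using the tokenizer/model API directly rather than a pipeline, assuming the repo id `pierreguillou/gpt2-small-portuguese` (inferred from the listing):

```python
# Sketch only: repo id inferred from the listing.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "pierreguillou/gpt2-small-portuguese"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Quem era Jim Henson? Jim Henson era", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
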
## BabyBERTa 2
**Task:** Large Language Model · **Tags:** Transformers, English · **Author:** phueb · **Downloads:** 94 · **Likes:** 0

BabyBERTa is a lightweight version of RoBERTa, trained on child-directed input and specifically designed for language acquisition research.

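A minimal fill-mask sketch, assuming the repo id `phueb/BabyBERTa-2` (inferred from the listing); setting `add_prefix_space=True` on the tokenizer is a precaution for RoBERTa-style checkpoints and may not be required here:

```python
# Sketch only: repo id inferred from the listing; add_prefix_space=True is
# a precaution for RoBERTa-style tokenizers, not a documented requirement.
from transformers import AutoTokenizer, pipeline

repo_id = "phueb/BabyBERTa-2"
tok = AutoTokenizer.from_pretrained(repo_id, add_prefix_space=True)
unmasker = pipeline("fill-mask", model=repo_id, tokenizer=tok)

for candidate in unmasker("The cat sat on the <mask>."):
    print(candidate["token_str"], round(candidate["score"], 3))
```
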
## GPT2 WECHSEL German
**License:** MIT · **Task:** Large Language Model · **Tags:** Transformers, German · **Author:** benjamin · **Downloads:** 36 · **Likes:** 4

A model trained with the WECHSEL method, which transfers monolingual language models across languages through efficient initialization of subword embeddings; this checkpoint is optimized for German.

## GePpeTto
**Task:** Large Language Model · **Tags:** Other · **Author:** LorenzoDeMattei · **Downloads:** 78.22k · **Likes:** 15

A GPT-2 model (117M parameters) pre-trained for Italian, trained on Italian Wikipedia and the ItWac corpus.
